9 research outputs found

    Game Theoretic Resistance to Denial of Service Attacks Using Hidden Difficulty Puzzles

    Get PDF
    Denial of Service (DoS) vulnerabilities are one of the major concerns in today’s Internet. Client puzzles offer a good mechanism to defend servers against DoS attacks. In this paper, we introduce the notion of hidden puzzle difficulty, where the attacker cannot determine the difficulty of the puzzle without expending a minimal amount of computational resources. Game theory is used to develop defense mechanisms that make use of such puzzles. New puzzles that satisfy the requirements of the defense mechanisms are proposed. We also show that our defense mechanisms are more effective than the ones proposed in the earlier work by Fallah.
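    A minimal sketch of a hidden-difficulty client puzzle, assuming a SHA-256 preimage search over a solution space whose size the server keeps secret; this is an illustrative construction, not the specific puzzles proposed in the paper. The client only learns the difficulty by doing the work, since all it receives is a nonce and a digest bounded by a public maximum difficulty D_MAX (a name introduced here for illustration).

```python
import os
import secrets
import hashlib

D_MAX = 20  # public upper bound on difficulty; the actual difficulty stays hidden

def make_puzzle(difficulty: int):
    """Server side: draw a secret solution from a space of size 2**difficulty.
    Only the nonce and digest are sent, so the client cannot tell the chosen
    difficulty apart from any other value up to D_MAX without searching."""
    assert 1 <= difficulty <= D_MAX
    nonce = os.urandom(16)
    solution = secrets.randbelow(2 ** difficulty)
    digest = hashlib.sha256(nonce + solution.to_bytes(8, "big")).digest()
    return nonce, digest

def solve_puzzle(nonce: bytes, digest: bytes) -> int:
    """Client side: brute-force candidates until the hash matches.
    Expected work grows with the hidden difficulty."""
    for candidate in range(2 ** D_MAX):
        if hashlib.sha256(nonce + candidate.to_bytes(8, "big")).digest() == digest:
            return candidate
    raise ValueError("no solution within the public bound")

nonce, digest = make_puzzle(difficulty=16)
print("solved:", solve_puzzle(nonce, digest))
```

    In a game-theoretic defense of the kind described above, the server would vary the hidden difficulty according to its chosen strategy, so that even an attacker who intends to abandon hard puzzles must spend hash computations before it can tell them apart from easy ones.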

    Undermining User Privacy on Mobile Devices Using AI

    Full text link
    Over the past years, literature has shown that attacks exploiting the microarchitecture of modern processors pose a serious threat to the privacy of mobile phone users. This is because applications leave distinct footprints in the processor, which can be used by malware to infer user activities. In this work, we show that these inference attacks are considerably more practical when combined with advanced AI techniques. In particular, we focus on profiling the activity in the last-level cache (LLC) of ARM processors. We employ a simple Prime+Probe based monitoring technique to obtain cache traces, which we classify with deep learning methods, including convolutional neural networks. We demonstrate our approach on an off-the-shelf Android phone by launching a successful attack from an unprivileged, zero-permission app in well under a minute. The app detects running applications with an accuracy of 98% and reveals opened websites and streaming videos by monitoring the LLC for at most 6 seconds. This is possible because deep learning compensates for measurement disturbances stemming from the inherently noisy LLC monitoring and unfavorable cache characteristics such as random line replacement policies. In summary, our results show that, thanks to advanced AI techniques, inference attacks are becoming alarmingly easy to implement and execute in practice. This once more calls for countermeasures that confine microarchitectural leakage and protect mobile phone applications, especially those valuing the privacy of their users.
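    As a rough sketch of the classification step, the snippet below trains a small one-dimensional convolutional network on cache-trace matrices. The trace shape (600 probe rounds over 256 monitored sets), the number of target applications, and the architecture are assumptions made for illustration, not values taken from the paper, and random arrays stand in for real Prime+Probe measurements.

```python
import numpy as np
import tensorflow as tf

# Hypothetical trace shape: 600 probe rounds over 256 monitored cache sets,
# labelled with one of 10 target applications (all numbers are assumptions).
N_SAMPLES, N_SETS, N_APPS = 600, 256, 10

model = tf.keras.Sequential([
    tf.keras.Input(shape=(N_SAMPLES, N_SETS)),
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(N_APPS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random placeholder data standing in for labelled Prime+Probe traces.
x = np.random.rand(128, N_SAMPLES, N_SETS).astype("float32")
y = np.random.randint(0, N_APPS, size=128)
model.fit(x, y, epochs=1, batch_size=16, verbose=0)
print(model.predict(x[:1]).argmax())  # predicted application label
```

    The appeal of a convolutional classifier in this setting is that it tolerates noisy, shifted traces, which is exactly the measurement disturbance the abstract says deep learning compensates for.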

    Resource-Freeing Attacks: Improve Your Cloud Performance (at Your Neighbor’s Expense)

    No full text
    Cloud computing promises great efficiencies by multiplexing resources among disparate customers. For example, Amazon’s Elastic Compute Cloud (EC2), Microsoft Azure, Google’s Compute Engine, and Rackspace Hosting all offer Infrastructure as a Service (IaaS) solutions that pack multiple customer virtual machines (VMs) onto the same physical server. These efficiency gains come at a cost: past work has shown that the performance of one customer’s VM can suffer due to interference from another. In experiments on a local testbed, we found that the performance of a cache-sensitive benchmark can degrade by more than 80% because of interference from another VM. This interference incentivizes a new class of attacks that we call resource-freeing attacks (RFAs). The goal is to modify the workload of a victim VM in a way that frees up resources for the attacker’s VM. We explore in depth a particular example of an RFA. Counter-intuitively, by adding load to a co-resident victim, the attack speeds up a class of cache-bound workloads. In a controlled lab setting we show that this can improve the performance of synthetic benchmarks by up to 60% over not running the attack. In the noisier setting of Amazon’s EC2, we still show improvements of up to 13%.
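    The premise that a co-resident workload can badly degrade a cache-sensitive one can be illustrated with the toy measurement below. It uses plain processes and NumPy arrays rather than VMs on a shared server, the buffer sizes are guesses at typical last-level cache dimensions, and the observed slowdown will vary by machine; it is a sketch of the contention an RFA exploits, not a reproduction of the paper's experiments.

```python
import time
import numpy as np
from multiprocessing import Process, Event

def cache_thrasher(stop):
    """Stand-in for an interfering co-resident workload: repeatedly stream over
    a buffer much larger than the last-level cache so co-runners keep missing."""
    big = np.zeros(64 * 1024 * 1024 // 8)   # ~64 MB of doubles (assumed > LLC size)
    while not stop.is_set():
        big[::8] += 1.0                     # touch one double per 64-byte cache line

def cache_sensitive_benchmark(iters=50):
    """Stand-in for the victim's cache-sensitive workload: repeated passes over
    a working set that fits in the LLC when it has the cache to itself."""
    small = np.zeros(4 * 1024 * 1024 // 8)  # ~4 MB working set (assumed < LLC size)
    t0 = time.perf_counter()
    for _ in range(iters):
        small[::8] += 1.0
    return time.perf_counter() - t0

if __name__ == "__main__":
    alone = cache_sensitive_benchmark()
    stop = Event()
    thrasher = Process(target=cache_thrasher, args=(stop,))
    thrasher.start()
    time.sleep(0.5)                          # let the thrasher warm up
    contended = cache_sensitive_benchmark()
    stop.set()
    thrasher.join()
    print(f"alone: {alone:.3f}s  contended: {contended:.3f}s "
          f"({contended / alone:.1f}x slower)")
```

    An RFA then works in the opposite direction: instead of suffering this contention, the attacker pushes extra load onto the victim so the victim shifts away from its cache-hungry phase, freeing the shared cache for the attacker's own cache-bound workload.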

    Cross Processor Cache Attacks

    No full text
    Multi-processor systems are becoming the de facto standard across different computing domains, ranging from high-end multi-tenant cloud servers to low-power mobile platforms. The denser integration of CPUs creates an opportunity for great economic savings, achieved by packing processes of multiple tenants, or by bundling all kinds of tasks at various privilege levels, onto the same platform. This level of sharing carries with it a serious risk of leaking sensitive information through the shared microarchitectural components. Microarchitectural attacks initially exploited only core-private resources, but were quickly generalized to resources shared within the CPU. We present the first fine-grained side-channel attack that works across processors. The attack does not require CPU co-location of the attacker and the victim. The novelty of the proposed work is that, for the first time, the directory protocol of high-efficiency CPU interconnects is targeted. The directory protocol is common to all modern multi-CPU systems; examples include AMD’s HyperTransport, Intel’s QuickPath, and ARM’s AMBA Coherent Interconnect. The proposed attack does not rely on any specific characteristic of the cache hierarchy, e.g. inclusiveness, which was assumed in all earlier works. Furthermore, the viability of the proposed covert channel is demonstrated with two new attacks: recovering a full AES key from OpenSSL and a full ElGamal key from libgcrypt, each within seconds on a shared AMD Opteron server.
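    The covert channel described above ultimately comes down to thresholding probe latencies: an access answered locally is fast, while one that triggers a cross-CPU directory lookup is slow. The toy decoder below only simulates that last step, with invented latency values and a crude median threshold; the actual attack requires native code, precise timers, and control over coherence state, none of which is shown here.

```python
import statistics

# Hypothetical probe latencies in cycles: ~80 for a local hit, ~300 when a
# sender on another CPU has touched the shared line (numbers are invented).
samples = [85, 310, 295, 78, 90, 320, 305, 82]

def decode(latencies):
    """Turn a sequence of probe latencies into bits by thresholding.
    A median split is a crude stand-in for a calibrated threshold."""
    threshold = statistics.median(latencies)
    return [1 if t > threshold else 0 for t in latencies]

print(decode(samples))  # -> [0, 1, 1, 0, 0, 1, 1, 0]
```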

    Security Issues and Challenges for Virtualization Technologies

    No full text